AI-Generated Content


Microsoft has a new plan to prove what's real and what's AI online

MIT Technology Review

A new proposal calls on social media and AI companies to adopt strict verification, but the company hasn't committed to following its own recommendations. There are high-profile cases you may easily spot, like when White House officials recently shared a manipulated image of a protester in Minnesota and then mocked those asking about it. Other times, manipulated content slips quietly into social media feeds and racks up views, like the videos that Russian influence campaigns are currently spreading to discourage Ukrainians from enlisting. It is into this mess that Microsoft has put forward a blueprint, shared with MIT Technology Review, for how to prove what's real online. An AI safety research team at the company recently evaluated how methods for documenting digital manipulation are faring against today's most worrying AI developments, like interactive deepfakes and widely accessible hyperrealistic models. It then recommended technical standards that AI companies and social media platforms can adopt.


AI 'slop' is transforming social media - and a backlash is brewing

BBC News

Théodore remembers the AI slop that tipped him over the edge. The image was of two emaciated, impoverished South Asian children. For some reason, despite their boyish features, they had thick beards. One of them had no hands and only one foot. The other was holding a sign saying it was his birthday and asking for likes.


Reddit overtakes TikTok in UK thanks to search algorithms and gen Z

The Guardian

Reddit is being touted as an antidote to AI-generated content. The platform is now Britain's fourth most visited social media site as users seek out human-generated content. Reddit, the online discussion platform, has overtaken TikTok as Britain's fourth most visited social media service, as search algorithms and gen Z users have dramatically transformed its prominence. The platform has undergone huge growth over the last two years, with an 88% increase in the proportion of UK internet users it reaches. Three in five Brits online now encounter the site, up from a third in 2023, according to Ofcom.


Pinterest Users Are Tired of All the AI Slop

WIRED

A surge of AI-generated content is frustrating Pinterest users and has left some questioning whether the platform still works at all. For five years, Caitlyn Jones has used Pinterest on a weekly basis to find recipes for her son. In September, Jones spotted a creamy chicken and broccoli slow-cooker recipe, sprinkled with golden cheddar and a pop of parsley. She quickly looked at the ingredients and added them to her grocery list. But just as she was about to start cooking, having already bought everything, one thing stood out: the recipe told her to start by "logging" the chicken into the slow cooker.


This year we were drowning in a sea of slick, nonsensical AI slop

New Scientist

There is no doubt that 2025 will be remembered as the year of slop. A popular term for incorrect, weird and often downright ugly AI-generated content, slop has rotted nearly every platform on the internet. Enough slop has accumulated over the past few years that scientists can now measure its effects on people over time. Researchers at the Massachusetts Institute of Technology found that people using large language models (LLMs) such as those behind ChatGPT to write essays show far less brain activity than those who don't. And then there are the potential ill effects on our mental health, with reports that certain chatbots are encouraging people to believe in fantasies or conspiracies, as well as urging them to self-harm, and that they may trigger or worsen psychosis.


User Negotiations of Authenticity, Ownership, and Governance on AI-Generated Video Platforms: Evidence from Sora

Shen, Bohui, Bhatta, Shrikar, Ireebanije, Alex, Liu, Zexuan, Choudhry, Abhinav, Gumusel, Ece, Zhou, Kyrie Zhixuan

arXiv.org Artificial Intelligence

As AI-generated video platforms rapidly advance, ethical challenges such as copyright infringement emerge. This study examines how users make sense of AI-generated videos on OpenAI's Sora by conducting a qualitative content analysis of user comments. Through a thematic analysis, we identified four dynamics that characterize how users negotiate authenticity, authorship, and platform governance on Sora. First, users acted as critical evaluators of realism, assessing micro-details such as lighting, shadows, fluid motion, and physics to judge whether AI-generated scenes could plausibly exist. Second, users increasingly shifted from passive viewers to active creators, expressing curiosity about prompts, techniques, and creative processes. Text prompts were perceived as intellectual property, generating concerns about plagiarism and remixing norms. Third, users reported blurred boundaries between real and synthetic media, worried about misinformation, and even questioned the authenticity of other commenters, suspecting bot-generated engagement. Fourth, users contested platform governance: some perceived moderation as inconsistent or opaque, while others shared tactics for evading prompt censorship through misspellings, alternative phrasing, emojis, or other languages. Despite this, many users also enforced ethical norms by discouraging the misuse of real people's images or disrespectful content. Together, these patterns highlighted how AI-mediated platforms complicate notions of reality, creativity, and rule-making in emerging digital ecosystems. Based on the findings, we discuss governance challenges in Sora and how user negotiations inform future platform governance.


AI deepfakes of real doctors spreading health misinformation on social media

The Guardian

An investigation found that real video of medical professionals is being manipulated using AI. TikTok and other social media platforms are hosting AI-generated deepfake videos of doctors whose words have been manipulated to help sell supplements and spread health misinformation. The factchecking organisation Full Fact has uncovered hundreds of such videos featuring impersonated versions of doctors and influencers directing viewers to Wellness Nest, a US-based supplements firm. All the deepfakes involve real footage of a health expert taken from the internet.


AI Slop Is Ruining Reddit for Everyone

WIRED

Reddit is considered one of the most human spaces left on the internet, but mods and users are overwhelmed with slop posts in the most popular subreddits. A Reddit post about a bride who demands a wedding guest wear a specific, unflattering shade is sure to provoke rage, let alone one about a bridesmaid or mother of the groom who wants to wear white. A scenario where a parent asks someone on an airplane to switch seats so they can sit next to their young child is likely to invoke the same rush of anger. But those posts may trigger a Reddit moderator's annoyance for a different reason: they are common themes within a growing genre of AI-generated, fake posts. These are examples that spring to mind for Cassie, one of dozens of moderators for r/AmItheAsshole.


No, your favourite influencer hasn't got a dozen dachshund dogs. It's just AI

BBC News

When scrolling through social media recently, you might have noticed posts which seem a bit off. It's all AI-generated and, due to its low quality and inauthenticity, it's being branded AI slop. Both social media users and content creators say they're worried that AI slop flooding feeds is leading to a less authentic online experience, and is drowning out real posts. But a new trend, which sees people adding AI-generated animals to original photographs, has encouraged some content creators to embrace AI.


AI use in American newspapers is widespread, uneven, and rarely disclosed

Russell, Jenna, Karpinska, Marzena, Akinode, Destiny, Thai, Katherine, Emi, Bradley, Spero, Max, Iyyer, Mohit

arXiv.org Artificial Intelligence

AI is rapidly transforming journalism, but the extent of its use in published newspaper articles remains unclear. We address this gap by auditing a large-scale dataset of 186K articles from online editions of 1.5K American newspapers published in the summer of 2025. Using Pangram, a state-of-the-art AI detector, we discover that approximately 9% of newly published articles are either partially or fully AI-generated. This AI use is unevenly distributed, appearing more frequently in smaller, local outlets, in specific topics such as weather and technology, and within certain ownership groups. We also analyze 45K opinion pieces from the Washington Post, the New York Times, and the Wall Street Journal, finding that they are 6.4 times more likely to contain AI-generated content than news articles from the same publications, with many AI-flagged op-eds authored by prominent public figures. Despite this prevalence, we find that AI use is rarely disclosed: a manual audit of 100 AI-flagged articles found only five disclosures of AI use. Overall, our audit highlights the immediate need for greater transparency and updated editorial standards regarding the use of AI in journalism to maintain public trust.